110 research outputs found

    Blowup solutions and their blowup rates for parabolic equations with non-standard growth conditions

    This paper concerns classical solutions of the homogeneous Dirichlet problem for parabolic equations coupled via exponential sources involving variable exponents. We first establish blow-up criteria for positive solutions. Then, for radial solutions, we obtain an optimal classification of simultaneous and non-simultaneous blow-up, expressed in terms of the maxima of the involved variable exponents. Finally, all blow-up rates are determined for both simultaneous and non-simultaneous blow-up solutions.

    Non-simultaneous blow-up of n components for nonlinear parabolic systems

    This paper deals with non-simultaneous and simultaneous blow-up for radially symmetric solutions (u_1, u_2, …, u_n) of heat equations coupled via the nonlinear boundary conditions ∂u_i/∂η = u_i^{p_i} · u_{i+1}^{q_{i+1}} (i = 1, 2, …, n). It is proved that there exist suitable initial data such that u_i (i ∈ {1, 2, …, n}) blows up alone if and only if q_{i+1} < p_i. All classifications for exactly two components blowing up simultaneously are obtained. We find that different positions (different values of k, i, n) of u_{i−k} and u_i lead to quite different blow-up rates. Interestingly, different initial data lead to different blow-up phenomena even under the same conditions on the exponent parameters. We also show that u_{i−k}, u_{i−k+1}, …, u_i (i ∈ {1, 2, …, n}, k ∈ {0, 1, 2, …, n−1}) blow up simultaneously, while the other components remain bounded, in different exponent regions. Moreover, the blow-up rates and blow-up sets are obtained.

    A Generative Adversarial Approach for Zero-Shot Learning from Noisy Texts

    Most existing zero-shot learning methods consider the problem as a visual-semantic embedding one. Given the demonstrated capability of Generative Adversarial Networks (GANs) to generate images, we instead leverage GANs to imagine unseen categories from text descriptions and hence recognize novel classes with no examples being seen. Specifically, we propose a simple yet effective generative model that takes as input noisy text descriptions about an unseen class (e.g., Wikipedia articles) and generates synthesized visual features for this class. With added pseudo data, zero-shot learning is naturally converted to a traditional classification problem. Additionally, to preserve the inter-class discrimination of the generated features, a visual pivot regularization is proposed as an explicit supervision. Unlike previous methods using complex engineered regularizers, our approach can suppress the noise well without additional regularization. Empirically, we show that our method consistently outperforms the state of the art on the largest available benchmarks for text-based zero-shot learning.
    Comment: To appear in CVPR1
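The conversion described above, from zero-shot learning to ordinary classification over synthesized features, can be sketched minimally. The snippet below is an illustrative stand-in, not the paper's pipeline: it uses a nearest-centroid classifier over GAN-synthesized features, where a real implementation would train a softmax classifier, and all names are hypothetical.

```python
import numpy as np

def classify_with_synth_features(synth_feats, synth_labels, query_feats):
    # Nearest-centroid classifier over GAN-synthesized visual features.
    # (Illustrative stand-in for the classifier a real pipeline would
    # train on the pseudo data; names are assumptions, not the paper's API.)
    classes = np.unique(synth_labels)
    centroids = np.stack([synth_feats[synth_labels == c].mean(axis=0)
                          for c in classes])
    # Assign each query feature to the class with the closest centroid
    dists = ((query_feats[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]
```

Once unseen-class features are synthesized, this reduction lets any off-the-shelf classifier recognize novel classes without ever seeing a real example.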

    OOGAN: Disentangling GAN with One-Hot Sampling and Orthogonal Regularization

    Exploring the potential of GANs for unsupervised disentanglement learning, this paper proposes a novel GAN-based disentanglement framework with One-Hot Sampling and Orthogonal Regularization (OOGAN). While previous works mostly attempt to tackle disentanglement learning through VAEs and seek to implicitly minimize the Total Correlation (TC) objective with various approximation methods, we show that GANs have a natural advantage in disentangling, via an alternating latent-variable (noise) sampling method that is straightforward and robust. Furthermore, we provide a brand-new perspective on designing the structure of the generator and discriminator, demonstrating that a minor structural change and an orthogonal regularization on model weights entail improved disentanglement. Instead of experimenting on simple toy datasets, we conduct experiments on higher-resolution images and show that OOGAN greatly pushes the boundary of unsupervised disentanglement.
    Comment: AAAI 202
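The one-hot sampling idea above can be sketched as drawing a hard one-hot control code alongside ordinary Gaussian noise. This is a minimal sketch assuming the latent is a concatenation of a one-hot code c and noise z; the dimension names and helper are illustrative assumptions, not the paper's API.

```python
import numpy as np

def sample_onehot_latent(batch, c_dim, z_dim, rng=None):
    # One-hot sampling of the controllable code c: each draw commits
    # to exactly one factor, rather than a soft continuous code.
    # (Hypothetical helper; layout is an assumption for illustration.)
    rng = np.random.default_rng(0) if rng is None else rng
    c = np.zeros((batch, c_dim))
    c[np.arange(batch), rng.integers(0, c_dim, size=batch)] = 1.0
    z = rng.standard_normal((batch, z_dim))  # unstructured noise part
    return np.concatenate([c, z], axis=1)
```

The hard one-hot choice makes each sampled latent unambiguous about which factor it activates, which is the property the abstract credits for robust disentanglement.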

    Common Diffusion Noise Schedules and Sample Steps are Flawed

    We discover that common diffusion noise schedules do not enforce the last timestep to have zero signal-to-noise ratio (SNR), and some implementations of diffusion samplers do not start from the last timestep. Such designs are flawed and do not reflect the fact that the model is given pure Gaussian noise at inference, creating a discrepancy between training and inference. We show that the flawed design causes real problems in existing implementations. In Stable Diffusion, it severely limits the model to generating only images of medium brightness and prevents it from generating very bright or very dark samples. We propose a few simple fixes: (1) rescale the noise schedule to enforce zero terminal SNR; (2) train the model with v prediction; (3) change the sampler to always start from the last timestep; (4) rescale classifier-free guidance to prevent over-exposure. These simple changes ensure the diffusion process is congruent between training and inference and allow the model to generate samples more faithful to the original data distribution.
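Fix (1) above can be sketched directly: shift the cumulative schedule so the final sqrt(alpha-bar) is exactly zero, then rescale so the first timestep is unchanged. This follows the rescaling recipe the abstract describes; the linear beta schedule in the test and the helper name are assumptions for demonstration, not a definitive implementation.

```python
import numpy as np

def rescale_zero_terminal_snr(betas):
    # Convert betas to the cumulative sqrt(alpha_bar) schedule
    alphas = 1.0 - betas
    alphas_bar = np.cumprod(alphas)
    sqrt_ab = np.sqrt(alphas_bar)

    # Shift so the last timestep has exactly zero SNR (sqrt_ab[-1] = 0),
    # then rescale so the first timestep keeps its original value
    first = sqrt_ab[0]
    sqrt_ab = sqrt_ab - sqrt_ab[-1]
    sqrt_ab = sqrt_ab * (first / sqrt_ab[0])

    # Convert the adjusted cumulative schedule back to betas
    alphas_bar = sqrt_ab ** 2
    alphas = np.concatenate([alphas_bar[:1], alphas_bar[1:] / alphas_bar[:-1]])
    return 1.0 - alphas
```

With a terminal SNR of zero, the model really is handed pure Gaussian noise at the last timestep, matching the inference-time condition the abstract identifies as missing. Note that zero terminal SNR makes epsilon prediction ill-posed at the final step, which is why fix (2), v prediction, accompanies it.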